Improving Adaptive Replacement Cache (ARC) by Reuse Distance

Authors

  • Woojoong Lee
  • Sejin Park
  • Baegjae Sung
  • Chanik Park
Abstract

Buffer caches are used to enhance the performance of file or storage systems by reducing I/O requests to the underlying storage media. In particular, a multi-level buffer cache hierarchy is commonly deployed in network file systems or storage systems. In this environment, the I/O access pattern on the second-level buffer caches of file servers or storage controllers differs from that on upper-level caches. The reuse distance of a block is an important metric for characterizing I/O access patterns. It is defined as the number of requests between two adjacent accesses to a block in an I/O stream. In [1, 2], Zhou et al. showed that the access pattern on second-level buffer caches has a hill-shaped reuse-distance distribution. This implies that two consecutive accesses to a data block have a relatively long temporal distance due to upper-level cache behavior. They also examined the behavior of the access patterns in terms of frequency, and revealed that the more frequently a block is accessed, the larger the portion of total accesses it accounts for. For second-level buffer caches, various techniques, including frequency-based block prioritizing [1, 2], exclusiveness [3, 4], and multi-level cache coordination [5], have been proposed with consideration of the temporal or frequency patterns of I/O workloads on the multi-level cache hierarchy. However, their complexity remains unsatisfactory. The Adaptive Replacement Cache (ARC) algorithm proposed by Megiddo et al. [6, 7] dynamically balances recency and frequency by using two Least-Recently-Used (LRU) queues in response to changing access patterns. ARC is simple to implement and has low computational overhead while performing well across varied workloads.
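The reuse-distance metric defined above can be computed in a single pass over an I/O trace. The following minimal Python sketch (function and variable names are illustrative, not from the paper) counts the number of requests between two adjacent accesses to the same block:

```python
def reuse_distances(stream):
    """For each access in `stream`, return the number of requests
    between it and the previous access to the same block, or None
    for a block's first access."""
    last_seen = {}   # block -> index of its most recent access
    distances = []
    for i, block in enumerate(stream):
        if block in last_seen:
            distances.append(i - last_seen[block] - 1)
        else:
            distances.append(None)
        last_seen[block] = i
    return distances

# Example: in the stream A B C A B, block A is reused after
# 2 intervening requests (B and C), and so is block B.
print(reuse_distances(["A", "B", "C", "A", "B"]))
# -> [None, None, None, 2, 2]
```

A histogram of these per-access distances over a real trace is exactly the (hill-shaped, for second-level caches) reuse-distance distribution discussed above.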
ARC not only outperforms most online algorithms, such as LRU, Frequency-Based Replacement (FBR) [8], Least-Frequently-Used (LFU) [9], and Low Inter-reference Recency Set (LIRS) [10], but is also comparable to algorithms such as LRU-2 [11], 2-Queue (2Q) [12], and Least-Recently/Frequently-Used (LRFU) [13], even when their parameters are tuned offline for the workload. However, because ARC does not take the reuse distance of I/O requests into account, it cannot perform efficiently on a second-level cache. Note that the reuse distances of most I/O accesses on the second-level cache will be long, so the recency queue of ARC cannot contribute much to cache hits. Furthermore, due to the long reuse distance of I/O requests on the second-level cache, most cache hits will be observed near the LRU (not MRU) position of the recency queue in ARC. If cache hits are observed near the LRU position of the recency queue, ARC tries to increase the size of the recency queue in order to capture more recency locality. Accordingly, the size of the frequency queue decreases. This becomes worse when the second-level cache size is equal to or smaller than the first-level cache size; in that case, the recency queue of ARC contributes nothing to cache hits. For more details, refer to [6, 7].
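The adaptation mechanism described above, where ghost hits enlarge the recency target at the expense of the frequency queue, can be illustrated with a simplified ARC sketch in Python. This condenses the eviction cases of Megiddo and Modha's algorithm [6, 7] for readability (for instance, it guards REPLACE so it never evicts from a non-full cache), so it is an illustrative sketch rather than a faithful reimplementation:

```python
from collections import OrderedDict

class SimpleARC:
    """Simplified sketch of ARC's recency/frequency balancing.

    T1 holds blocks seen once (recency), T2 blocks seen at least twice
    (frequency); B1/B2 are ghost lists holding the IDs of blocks recently
    evicted from T1/T2. The target size p of T1 grows on ghost hits in B1
    and shrinks on ghost hits in B2: exactly the mechanism that misfires
    when most reuse distances are long."""

    def __init__(self, capacity):
        self.c = capacity
        self.p = 0                                # target size of T1
        self.t1, self.t2 = OrderedDict(), OrderedDict()
        self.b1, self.b2 = OrderedDict(), OrderedDict()

    def _replace(self, in_b2):
        # Evict the LRU block of T1 or T2 into its ghost list.
        if self.t1 and (len(self.t1) > self.p or
                        (in_b2 and len(self.t1) == self.p)):
            k, _ = self.t1.popitem(last=False)
            self.b1[k] = None
        else:
            k, _ = self.t2.popitem(last=False)
            self.b2[k] = None

    def access(self, key):
        """Access a block; return True on a cache hit, False on a miss."""
        if key in self.t1 or key in self.t2:      # cache hit
            self.t1.pop(key, None)
            self.t2.pop(key, None)
            self.t2[key] = None                   # promote to MRU of T2
            return True
        if key in self.b1:                        # ghost hit: favor recency
            self.p = min(self.c, self.p +
                         max(1, len(self.b2) // max(1, len(self.b1))))
            del self.b1[key]
            self._replace(in_b2=False)
            self.t2[key] = None
            return False
        if key in self.b2:                        # ghost hit: favor frequency
            self.p = max(0, self.p -
                         max(1, len(self.b1) // max(1, len(self.b2))))
            del self.b2[key]
            self._replace(in_b2=True)
            self.t2[key] = None
            return False
        # Complete miss: trim the directory, then insert at MRU of T1.
        total = len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2)
        if len(self.t1) + len(self.b1) == self.c:
            if len(self.t1) < self.c:
                self.b1.popitem(last=False)
                self._replace(in_b2=False)
            else:
                self.t1.popitem(last=False)       # B1 empty: drop LRU of T1
        elif total >= self.c:
            if total == 2 * self.c:
                self.b2.popitem(last=False)
            if len(self.t1) + len(self.t2) >= self.c:
                self._replace(in_b2=False)
        self.t1[key] = None
        return False
```

With a long-reuse-distance stream, hits land in the ghost list B1 of the recency queue, so p keeps growing and T2, the only queue that could capture the frequency locality of a second-level cache, keeps shrinking.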


Similar Articles

FRD: A Filtering based Buffer Cache Algorithm that Considers both Frequency and Reuse Distance

Buffer cache algorithms play a major role in filling the large performance gap between main memory and I/O devices in a mass storage system. Many buffer cache algorithms have been developed, such as the low inter-reference recency set (LIRS) and adaptive replacement cache (ARC). Careful analysis of real-world workloads leads us to observe that approximately 50 to 90% of blocks are accessed three or...


Analyzing Adaptive Cache Replacement Strategies

Adaptive Replacement Cache (ARC) and CLOCK with Adaptive Replacement (CAR) are state-of-the-art "adaptive" cache replacement algorithms invented to improve on the shortcomings of classical cache replacement policies such as LRU, LFU, and CLOCK. By separating out items that have been accessed only once and items that have been accessed more frequently, both ARC and CAR are able to control the harm...


To ARC or Not to ARC

Cache replacement algorithms have focused on managing caches that are in the datapath. In datapath caches, every cache miss results in a cache update. Cache updates are expensive because they induce cache insertion and cache eviction overheads, which can be detrimental to both cache performance and cache device lifetime. Non-datapath caches, such as host-side flash caches, allow the flexibility o...


ARC: A Self-Tuning, Low Overhead Replacement Cache

We consider the problem of cache management in a demand paging scenario with uniform page sizes. We propose a new cache management policy, namely, Adaptive Replacement Cache (ARC), that has several advantages. In response to evolving and changing access patterns, ARC dynamically, adaptively, and continually balances between the recency and frequency components in an online and self-tuning fashion...


Cache Replacement Policy Using Map-based Adaptive Insertion

In this paper, we propose a map-based adaptive insertion policy (MAIP) as a novel cache replacement policy. MAIP estimates the possibility of data reuse on the basis of data reuse history. To track data reuse history, MAIP employs a bitmap data structure, which we call a memory access map. The memory access map holds all accessed memory locations in a fixed-size memory area to detect the data reu...
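A bitmap of this general kind can be sketched in a few lines. The following toy Python class (illustrative names and a 64-byte line size are assumptions, not details from the MAIP paper) keeps one bit per cache-line-sized chunk of a fixed-size memory area; a later access that finds its bit already set signals data reuse:

```python
class MemoryAccessMap:
    """Toy bitmap over a fixed-size memory area: one bit per
    cache-line-sized chunk, set on first access, so a repeat
    access to an already-set bit indicates data reuse."""
    LINE = 64  # assumed cache-line size in bytes

    def __init__(self, base, size):
        self.base = base
        self.bits = [False] * (size // self.LINE)

    def access(self, addr):
        """Return True if this line was accessed before (reuse),
        False on a first access, None if outside the tracked area."""
        idx = (addr - self.base) // self.LINE
        if not (0 <= idx < len(self.bits)):
            return None
        reused = self.bits[idx]
        self.bits[idx] = True
        return reused
```

A replacement policy could consult such a map at insertion time, giving blocks with a history of reuse a more protected position in the cache.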




Publication date: 2011